On Hair Recognition in the Wild by Machine
Abstract
We present an algorithm for identity verification using only information from the hair. Face recognition in the wild (i.e., unconstrained settings) is highly useful in a variety of applications, but performance suffers due to many factors, e.g., obscured face, lighting variation, extreme pose angle, and expression. It is well known that humans utilize hair for identification under many of these scenarios, due either to the consistent hair appearance of the same subject or to obvious hair discrepancies between different subjects, but little work exists to replicate this intelligence artificially. We propose a learned hair matcher using shape, color, and texture features derived from localized patches through an AdaBoost technique with abstaining weak classifiers when features are not present in the given location. The proposed hair matcher achieves 71.53% accuracy on the LFW View 2 dataset. Hair also reduces the error of a Commercial Off-The-Shelf (COTS) face matcher by 5.7% through simple score-level fusion.

Introduction

For decades machine-based face recognition has been an active topic in artificial intelligence. Generations of researchers have developed a wide variety of face recognition algorithms, starting from Dr. Kanade's thesis using a shape-based approach (1973), to the appearance-based Eigenface approach (Turk and Pentland 1991), and to recent approaches that integrate powerful feature representations with machine learning techniques (Chen et al. 2013). It is generally agreed that face recognition achieves satisfying performance in constrained settings, as shown by the MBGC test (Phillips et al. 2009). However, face recognition performance in unconstrained scenarios is still far from ideal, and there have been substantial efforts to improve the state of the art, especially demonstrated by the series of developments on the benchmark Labeled Faces in the Wild (LFW) database (Huang et al. 2007).

[Figure 1: A hair matcher takes two images and their corresponding segmented hair masks and determines whether they belong to the same subject or different subjects.]

Despite the vast amount of face recognition work, in both constrained and unconstrained scenarios, almost all algorithms rely on the internal parts of the face (e.g., eyes, nose, cheeks, and mouth) and exclude the external parts of the face, such as hair, for recognition. Only very few papers discuss the role of hair in face recognition. For example, Chen et al. (2001) show that hair can dominate the classification decision for PCA-type approaches. Yacoob and Davis (2006) quantitatively evaluate the performance of hair-based constrained face recognition. Kumar et al. (2011) present an attribute-based face verification system where hair comprises 10 out of 73 attributes for face matching. So far there is no prior work studying hair for the emerging problem of unconstrained face recognition. This lack of study is partially attributable to the common impression that hair is not stable and can easily change. However, observing people around us, we see that ordinary people do not change their hairstyle often. Hence, out of scientific curiosity, we employ a data-driven approach to study whether hair is indeed discriminative, i.e., we let the data tell us if the common impression is true.
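As a rough illustration of the simple score-level fusion reported in the abstract (hair reducing a COTS face matcher's error by 5.7%), the Python sketch below normalizes the two matchers' similarity scores and combines them with a weighted sum. The min-max normalization, the 0.7/0.3 weights, and the function names are illustrative assumptions, not the authors' exact scheme.

```python
import numpy as np

def minmax_normalize(scores):
    """Map a set of match scores to [0, 1] so that hair and face
    scores become comparable before fusion (illustrative choice)."""
    scores = np.asarray(scores, dtype=float)
    lo, hi = scores.min(), scores.max()
    return (scores - lo) / (hi - lo) if hi > lo else np.zeros_like(scores)

def fuse_scores(face_scores, hair_scores, w_face=0.7, w_hair=0.3):
    """Weighted-sum score-level fusion of a COTS face matcher and a
    hair matcher.  The 0.7 / 0.3 weights are purely hypothetical."""
    f = minmax_normalize(face_scores)
    h = minmax_normalize(hair_scores)
    return w_face * f + w_hair * h

if __name__ == "__main__":
    face = [0.91, 0.40, 0.55]   # COTS face-matcher similarities (made-up values)
    hair = [0.80, 0.35, 0.70]   # hair-matcher similarities (made-up values)
    print(fuse_scores(face, hair))  # fused scores, thresholded for verification
```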
In contrast, there is a long history of studying the role of hair in human-based face recognition (Sinha and Poggio 1996; Wright and Sladden 2003; Johnston and Edmonds 2009; Toseeb, Keeble, and Bryant 2012). Intuitively, you notice when a friend cuts their hair or changes their hairstyle. Indeed, Davies, Ellis, and Shepherd (1981) show that hair is the most important single feature for recognizing familiar faces. Sinha and Poggio (1996) combine the internal part of Bill Clinton's face with the hair and other external features of Al Gore, and the synthesized face appears more similar to Gore. A similar study concludes that hair plays a vital role in human-based face recognition, especially for recognizing faces of one's own gender (Wright and Sladden 2003). Driven by both scientific curiosity and the discrepancy between hair-based human and machine face recognition, this paper aims to study whether, how, and when hair is useful for unconstrained face recognition by a machine.

Our study is also motivated and enabled by a number of exciting recent studies on hair segmentation or labeling (Scheffler, Odobez, and Marconi 2011; Lee et al. 2008; Wang, Ai, and Tang 2012; Kae et al. 2013; Wang et al. 2013), which substantially improve segmentation accuracy on unconstrained facial images. For instance, the advanced hair labeling approach of Kae et al. (2013) achieves 95% accuracy relative to the ground truth at the superpixel level. The availability of excellent hair segmentation and the need for improved face recognition enable us to study this interesting topic.

Unlike conventional face recognition, which uses only the internal parts of two faces, this paper studies how to perform face recognition using a complementary part, the hair regions, of two faces. We assume that a good-quality hair segmentation has been conducted on both images. As shown in Fig. 1, our central task is to design a discriminative feature representation and classifier, termed a "hair matcher", to compute the similarity between two hair regions. Since hair is a highly non-rigid object, we adopt a local patch-based approach for feature representation. Given two faces aligned by their eye locations, we cast a series of rays emanating from the center point between the two eyes and intersecting the contour of the hair. We use the hair segmenter of Kae et al. (2013) to identify the hair mask. The intersection points determine the local patches where we compute a rich set of carefully designed hair features, including Bag-of-Words (BoW)-based color features, texture features, and hair-mask-based shape features. We use a boosting algorithm to select features from local patches to form a classifier. Our boosting algorithm can abstain at the weak-classifier level, reflecting the fact that hair may not be present at some local patches (Schapire and Singer 1999).

Our work differs from the prior work of Yacoob and Davis (2006) and Kumar et al. (2011) in three aspects: 1) we focus on unconstrained face recognition with greater hair variations; 2) we employ a local-patch-driven approach to optimally combine a wide variety of carefully designed features, rather than using global, human-defined descriptors such as split location, length, volume, and curls; 3) we explicitly study the fusion of hair with face, which is critical for applying hair recognition to real-world applications.
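To make the abstaining behavior concrete, the sketch below shows one way a patch-level decision stump could vote +1/-1 when hair is present in its patch and abstain (output 0) otherwise, with the boosted score formed as a weighted sum of stump outputs in the spirit of confidence-rated AdaBoost (Schapire and Singer 1999). The class name PatchStump, the pairwise feature-difference tensor layout, and the training-free usage are assumptions for illustration, not the authors' implementation.

```python
import numpy as np

class PatchStump:
    """Decision stump over one feature of one local hair patch.
    It abstains (outputs 0) whenever the patch contains no hair,
    so that patch contributes nothing to the boosted score."""

    def __init__(self, patch_idx, feat_idx, threshold, polarity):
        self.patch_idx = patch_idx   # which local patch along the rays (illustrative index)
        self.feat_idx = feat_idx     # which feature (color / texture / shape) within the patch
        self.threshold = threshold
        self.polarity = polarity     # +1 or -1

    def predict(self, features, hair_present):
        """features: (n_pairs, n_patches, n_feats) patch feature differences
        hair_present: (n_pairs, n_patches) boolean mask of hair presence."""
        x = features[:, self.patch_idx, self.feat_idx]
        out = np.where(x < self.threshold, self.polarity, -self.polarity).astype(float)
        out[~hair_present[:, self.patch_idx]] = 0.0   # abstain when no hair in this patch
        return out

def boosted_score(stumps, alphas, features, hair_present):
    """Weighted sum of (possibly abstaining) weak-classifier outputs;
    its sign gives the same/different-subject decision."""
    score = np.zeros(features.shape[0])
    for stump, alpha in zip(stumps, alphas):
        score += alpha * stump.predict(features, hair_present)
    return score

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    feats = rng.normal(size=(4, 10, 8))      # 4 pairs, 10 patches, 8 features each (toy data)
    present = rng.random((4, 10)) > 0.3      # hair present in ~70% of patches
    stumps = [PatchStump(patch_idx=2, feat_idx=5, threshold=0.0, polarity=+1)]
    print(boosted_score(stumps, alphas=[1.0], features=feats, hair_present=present))
```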
In summary, this paper has a number of contributions:
• We develop a hair matcher for unconstrained face recognition and evaluate it on the standard LFW database. This is the first known reported result from hair alone on this de facto benchmark of unconstrained face recognition.
• We demonstrate that hair and face recognition are uncorrelated and fail under different circumstances, which allows for improvement through fusion.
• We use basic fusion techniques with the proposed hair matcher and a COTS face matcher and demonstrate improved face verification performance on the LFW database.

Hair Recognition Approach

Hair Matcher Framework

Given a pair of images I